Sponsored Article

The Influence of HPC-ers: Setting the Standard for What’s “Cool”
Jan. 16, 2025

A look back to supercomputing at the turn of the century

When I first attended the Supercomputing (SC) conferences back in the early 2000s as an IBMer working in High Performance Computing (HPC), it was obvious this conference was intended for serious computer science researchers and industries singularly focused on pushing the boundaries of computing. Linux was still in its infancy. I vividly remember having to re-compile kernels with newly released drivers every time a new server came to market, just so I could get the system to PXE boot over the network. But there was one …


The Evolution, Convergence and Cooling of AI & HPC Gear
Nov. 7, 2024

Years ago, when Artificial Intelligence (AI) began to emerge as a technology that could be harnessed as a powerful tool to change the way the world works, organizations began to kick the AI tires by exploring its potential to enhance their research or business. To get started with AI, however, neural networks had to be built, models trained on large data sets, and microprocessors were needed that could handle the matrix-multiplication calculations at the heart of these computationally demanding tasks. Enter the accelerator.
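
(As a rough illustration of why that hardware matters, and not something taken from the article itself: the core of a neural network's forward pass is an ordinary matrix multiply, and the layer sizes below are made up purely for the sketch.)

    # Minimal sketch: one dense neural-network layer reduces to a matrix multiply,
    # the operation that accelerators are built to parallelize.
    import numpy as np

    batch, d_in, d_out = 64, 1024, 4096                  # hypothetical layer sizes
    x = np.random.rand(batch, d_in).astype(np.float32)   # input activations
    w = np.random.rand(d_in, d_out).astype(np.float32)   # layer weights
    b = np.zeros(d_out, dtype=np.float32)                # biases

    y = np.maximum(x @ w + b, 0.0)                       # matrix multiply plus ReLU

    # Each layer costs roughly batch * d_in * d_out multiply-adds, which is the
    # workload that made purpose-built accelerators attractive in the first place.
    print(y.shape)                                       # (64, 4096)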


News Feed

PNNL Leads the Latest SciDAC Institute

The new institute will focus on making scientific machine learning accessible to domain scientists.

Dec. 11, 2025 — The LEarning-Accelerated Domain Science (LEADS) institute has been selected as the newest Scientific Discovery through Advanced Computing (SciDAC) institute supported by the Department of Energy’s Advanced Scientific Computing Research program. Panos Stinis, leader of the Computational Mathematics group at Pacific Northwest National Laboratory, will direct […]

The post PNNL Leads the Latest SciDAC Institute appeared first on HPCwire.

Here’s What’s Inside DOE’s $320 Million Genesis Mission Investment

The Department of Energy (DOE) this week unveiled its first round of funding for Genesis Mission, the government’s new AI-for-science framework that’s intended to solidify the US’s lead in the burgeoning sector. Included in the overall $320 million spending plan is $40 million for the new American Science Cloud and $30 million for Transformational AI Models […]

The post Here’s What’s Inside DOE’s $320 Million Genesis Mission Investment appeared first on HPCwire.

Oracle Is Using OpenAI To Build A Platform For The Enterprise

Did people complain – and by people, we mean Wall Street – as the world’s largest bookseller invested huge amounts of money to transform itself into an alternative to driving to Wal-Mart?

Oracle Is Using OpenAI To Build A Platform For The Enterprise was written by Timothy Prickett Morgan at The Next Platform.

Personal ‘AI Supercomputer’ Runs 120B-Parameter LLMs On-Device, Tiiny AI Says

It's often said that the supercomputers of a few decades ago packed less power than today's smart watches. Now we have a company, Tiiny AI Inc., claiming to have built the world's smallest personal AI supercomputer that can run a 120-billion-parameter large language model on-device — without cloud connectivity, servers or GPUs.

The post Personal ‘AI Supercomputer’ Runs 120B-Parameter LLMs On-Device, Tiiny AI Says appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

Siemens and GlobalFoundries to Collaborate on AI-Driven Chip Manufacturing

Siemens and GlobalFoundries (GF) said today they have entered a collaboration to use their AI capabilities to enhance semiconductor manufacturing and advanced industries. In a memorandum of understanding, the companies say they will focus on automation technologies for semiconductor fabrication, electrification, digital solutions and software ranging from chip development to product lifecycle management. This […]

The post Siemens and GlobalFoundries to Collaborate on AI-Driven Chip Manufacturing appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

Driving HPC Performance Up Is Easier Than Keeping The Spending Constant

We are still mulling over all of the new HPC-AI supercomputer systems that were announced in recent months before and during the SC25 supercomputing conference in St Louis, particularly how the slew of new machines announced by the HPC national labs will be advancing not just the state of the art, but also pushing down the cost of the FP64 floating point operations that still drive a lot of HPC simulation and modeling work.

Driving HPC Performance Up Is Easier Than Keeping The Spending Constant was written by Timothy Prickett Morgan at The Next Platform.

TOP500 News





The List

11/2025 Highlights

On the 66th edition of the TOP500, El Capitan remains No. 1 and JUPITER Booster becomes the fourth Exascale system.

The JUPITER Booster system at the EuroHPC / Jülich Supercomputing Centre in Germany, at No. 4, submitted a new measurement of 1.000 Exaflop/s on the HPL benchmark. It is the fourth Exascale system on the TOP500 and the first one outside of the USA.

El Capitan, Frontier, and Aurora are still leading the TOP500. All three are installed at DOE laboratories in the USA.

The El Capitan system at Lawrence Livermore National Laboratory in California, USA, remains the No. 1 system on the TOP500. The HPE Cray EX255a system was remeasured at 1.809 Exaflop/s on the HPL benchmark. LLNL also achieved 17.41 Petaflop/s on the HPCG benchmark, which makes the system No. 1 on that ranking as well.

El Capitan has 11,340,000 cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. It uses the Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 60.9 Gigaflops/watt.
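
(A quick back-of-envelope derived only from the figures quoted above, not an official TOP500 number: dividing the HPL result by the reported energy efficiency gives the approximate power drawn during the benchmark run.)

    # Rough power estimate from the numbers above: power = Rmax / efficiency.
    rmax_flops = 1.809e18              # HPL result: 1.809 Exaflop/s
    flops_per_watt = 60.9e9            # reported efficiency: 60.9 Gigaflops/watt
    power_megawatts = rmax_flops / flops_per_watt / 1e6
    print(f"~{power_megawatts:.1f} MW during the HPL run")   # roughly 29.7 MW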


List Statistics